
    Predictive validity of the CriSTAL tool for short-term mortality in older people presenting at Emergency Departments: a prospective study

    © 2018, The Author(s). Abstract: To determine the validity of the Australian clinical prediction tool Criteria for Screening and Triaging to Appropriate aLternative care (CriSTAL), based on objective clinical criteria, to accurately identify risk of death within 3 months of admission among older patients. Methods: Prospective study of ≥ 65-year-olds presenting at emergency departments in five Australian (Aus) and four Danish (DK) hospitals. Logistic regression analysis was used to model factors for death prediction; sensitivity, specificity, area under the ROC curve and calibration with bootstrapping techniques were used to describe predictive accuracy. Results: 2493 patients, with median age 78–80 years (DK–Aus). The deceased had significantly higher mean CriSTAL scores: an Australian mean of 8.1 (95% CI 7.7–8.6) vs. 5.8 (95% CI 5.6–5.9), and a Danish mean of 7.1 (95% CI 6.6–7.5) vs. 5.5 (95% CI 5.4–5.6). The model with the Fried frailty score was optimal for the Australian cohort, but prediction with the Clinical Frailty Scale (CFS) was also good (AUROC 0.825 and 0.81, respectively). Values for the Danish cohort were AUROC 0.764 with Fried and 0.794 using the CFS. The most significant independent predictors of short-term death in both cohorts were advanced malignancy, frailty, male gender and advanced age. CriSTAL’s accuracy was only modest for in-hospital death prediction in either setting. Conclusions: The modified CriSTAL tool (with the CFS instead of Fried’s frailty instrument) has good discriminant power to improve prognostic certainty of short-term mortality for ED physicians in both health systems. This shows promise in enhancing clinicians’ confidence in initiating earlier end-of-life discussions.
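    A minimal sketch of the kind of analysis described above, assuming a pandas DataFrame with hypothetical columns cristal_score and death_3m: a logistic model on the CriSTAL score with the AUROC summarised by a bootstrap percentile interval. It reports apparent (same-sample) performance only and is not the study's own code.

        # Sketch only: logistic model on a risk score plus a bootstrapped AUROC.
        # Column names (cristal_score, death_3m) are hypothetical.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def auroc_with_bootstrap(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
            X = df[["cristal_score"]].to_numpy()
            y = df["death_3m"].to_numpy()

            model = LogisticRegression().fit(X, y)
            p = model.predict_proba(X)[:, 1]
            auc = roc_auc_score(y, p)

            # Bootstrap the AUROC to describe its uncertainty.
            rng = np.random.default_rng(seed)
            boot = []
            for _ in range(n_boot):
                idx = rng.integers(0, len(y), len(y))
                if len(np.unique(y[idx])) < 2:  # resample must contain both classes
                    continue
                boot.append(roc_auc_score(y[idx], p[idx]))
            lo, hi = np.percentile(boot, [2.5, 97.5])
            return auc, (lo, hi)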

    Estimation of required sample size for external validation of risk models for binary outcomes

    Risk-prediction models for health outcomes are used in practice as part of clinical decision-making, and it is essential that their performance be externally validated. An important aspect in the design of a validation study is choosing an adequate sample size. In this paper, we investigate the sample size requirements for validation studies with binary outcomes to estimate measures of predictive performance (the C-statistic for discrimination, and the calibration slope and calibration in the large for calibration). We aim for sufficient precision in the estimated measures. In addition, we investigate the sample size needed to achieve sufficient power to detect a difference from a target value. Under normality assumptions on the distribution of the linear predictor, we obtain simple estimators for sample size calculations based on the measures above. Simulation studies show that the estimators perform well for common values of the C-statistic and outcome prevalence when the linear predictor is marginally normal. Their performance deteriorates only slightly when the normality assumptions are violated. We also propose estimators which do not require normality assumptions but do require specification of the marginal distribution of the linear predictor and the use of numerical integration. These estimators were also seen to perform very well under marginal normality. Our sample size equations require a specified standard error (SE) and the anticipated C-statistic and outcome prevalence. The sample size requirement varies according to the prognostic strength of the model, outcome prevalence, choice of performance measure and study objective. For example, to achieve an SE < 0.025 for the C-statistic, 60–170 events are required if the true C-statistic and outcome prevalence are between 0.64–0.85 and 0.05–0.3, respectively. For the calibration slope and calibration in the large, achieving SE < 0.15 would require 40–280 and 50–100 events, respectively. Our estimators may also be used for survival outcomes when the proportion of censored observations is high.
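    The calculation below illustrates the general idea of sizing a validation study for a target precision of the C-statistic. It uses the classical Hanley and McNeil (1982) variance approximation rather than the paper's own estimators, so the resulting numbers are indicative only; the anticipated C-statistic and prevalence in the example are arbitrary.

        # Sketch: smallest validation sample whose anticipated SE of the
        # C-statistic falls below a target, using the Hanley-McNeil approximation.
        import math

        def c_statistic_se(auc: float, n_events: int, n_nonevents: int) -> float:
            """Hanley-McNeil (1982) approximation to the SE of the C-statistic."""
            q1 = auc / (2 - auc)
            q2 = 2 * auc ** 2 / (1 + auc)
            var = (auc * (1 - auc)
                   + (n_events - 1) * (q1 - auc ** 2)
                   + (n_nonevents - 1) * (q2 - auc ** 2)) / (n_events * n_nonevents)
            return math.sqrt(var)

        def min_validation_size(auc: float, prevalence: float, target_se: float = 0.025):
            """Smallest total N (and event count) with anticipated SE below target_se."""
            n = 20
            while True:
                events = max(1, round(n * prevalence))
                if c_statistic_se(auc, events, n - events) < target_se:
                    return n, events
                n += 1

        # Example: anticipated C-statistic 0.75, outcome prevalence 0.1;
        # prints the required total size and event count under this approximation.
        print(min_validation_size(auc=0.75, prevalence=0.1))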

    Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    Background: Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods: In this paper we present several extensions to decision curve analysis, including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results: Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion: Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
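    The core net-benefit calculation behind a decision curve can be computed directly from predicted probabilities, as the abstract notes. The sketch below assumes arrays y (observed binary outcomes) and p (predicted risks) and omits the paper's extensions (overfit correction, confidence intervals, censored data and competing risks).

        # Sketch: net benefit of a model across threshold probabilities,
        # compared with treating everyone or no one.
        import numpy as np

        def net_benefit(y: np.ndarray, p: np.ndarray, threshold: float) -> float:
            """Net benefit of treating patients with predicted risk >= threshold."""
            n = len(y)
            treat = p >= threshold
            tp = np.sum(treat & (y == 1))
            fp = np.sum(treat & (y == 0))
            return tp / n - fp / n * threshold / (1 - threshold)

        def decision_curve(y, p, thresholds=np.arange(0.01, 0.60, 0.01)):
            """Net benefit of the model, of treating all, and of treating none."""
            prevalence = np.mean(y)
            model = [net_benefit(y, p, t) for t in thresholds]
            treat_all = [prevalence - (1 - prevalence) * t / (1 - t) for t in thresholds]
            treat_none = np.zeros_like(thresholds)
            return thresholds, model, treat_all, treat_none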

    Validation and Recalibration of Two Multivariable Prognostic Models for Survival and Independence in Acute Stroke

    Introduction: Various prognostic models have been developed for acute stroke, including one based on age and five binary variables (the ‘six simple variables’ model; SSVMod) and one based on age plus scores on the National Institutes of Health Stroke Scale (NIHSSMod). The aims of this study were to externally validate and recalibrate these models, and to compare their predictive ability in relation to both survival and independence. Methods: Data from a large clinical trial of oxygen therapy (n = 8003) were used to determine the discrimination and calibration of the models, using C-statistics, calibration plots, and Hosmer-Lemeshow statistics. Methods of recalibration in the large and logistic recalibration were used to update the models. Results: For discrimination, both models functioned better for survival (C-statistics between 0.802 and 0.837) than for independence (C-statistics between 0.725 and 0.735). Both models showed slight shortcomings with regard to calibration, over-predicting survival and under-predicting independence; the NIHSSMod performed slightly better than the SSVMod. For the most part, there were only minor differences between ischaemic and haemorrhagic strokes. Logistic recalibration successfully updated the models for a clinical trial population. Conclusions: Both prognostic models performed well overall in a clinical trial population. The choice between them is probably better based on clinical and practical considerations than on statistical considerations.
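    The two updating methods mentioned above can be illustrated on the linear predictor (LP) of an existing model evaluated in new data: logistic recalibration re-estimates both intercept and slope, while recalibration in the large re-estimates only the intercept with the slope fixed at 1. This is a generic sketch rather than the study's code; variable names are illustrative.

        # Sketch: two standard ways to update an existing model's linear predictor.
        import numpy as np
        import statsmodels.api as sm

        def recalibrate(lp: np.ndarray, y: np.ndarray):
            # Logistic recalibration: logit(p) = a + b * LP (intercept and slope).
            logistic = sm.GLM(y, sm.add_constant(lp),
                              family=sm.families.Binomial()).fit()

            # Recalibration in the large: slope fixed at 1 (LP enters as an offset),
            # only the intercept is re-estimated.
            in_the_large = sm.GLM(y, np.ones((len(y), 1)),
                                  family=sm.families.Binomial(), offset=lp).fit()

            return {"intercept_and_slope": logistic.params,
                    "intercept_only": in_the_large.params}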

    Prediction of intracranial findings on CT-scans by alternative modelling techniques

    Background: Prediction rules for intracranial traumatic findings in patients with minor head injury are designed to reduce the use of computed tomography (CT) without missing patients at risk for complications. This study investigates whether alternative modelling techniques might improve the applicability and simplicity of such prediction rules. Methods: We included 3181 patients with minor head injury who had received CT scans be

    The Value of Preseason Screening for Injury Prediction: The Development and Internal Validation of a Multivariable Prognostic Model to Predict Indirect Muscle Injury Risk in Elite Football (Soccer) Players

    © 2020, The Author(s). Background: In elite football (soccer), periodic health examination (PHE) could provide prognostic factors to predict injury risk. Objective: To develop and internally validate a prognostic model to predict individualised indirect (non-contact) muscle injury (IMI) risk during a season in elite footballers, using only PHE-derived candidate prognostic factors. Methods: Routinely collected preseason PHE and injury data were used from 152 players over 5 seasons (1st July 2013 to 19th May 2018). Ten candidate prognostic factors (12 parameters) were included in model development. Multiple imputation was used to handle missing values. The outcome was any time-loss, index indirect muscle injury (I-IMI) affecting the lower extremity. A full logistic regression model was fitted, and a parsimonious model was developed using backward selection to remove factors that exceeded a significance threshold equivalent to Akaike’s Information Criterion (alpha = 0.157). Predictive performance was assessed through calibration, discrimination and decision-curve analysis, averaged across all imputed datasets. The model was internally validated using bootstrapping and adjusted for overfitting. Results: During 317 participant-seasons, 138 I-IMIs were recorded. The parsimonious model included only age and frequency of previous IMIs; apparent calibration was perfect, but discrimination was modest (C-index = 0.641, 95% confidence interval (CI) = 0.580 to 0.703), with clinical utility evident between risk thresholds of 37–71%. After validation and adjustment for overfitting, performance deteriorated (C-index = 0.589 (95% CI = 0.528 to 0.651); calibration-in-the-large = −0.009 (95% CI = −0.239 to 0.239); calibration slope = 0.718 (95% CI = 0.275 to 1.161)). Conclusion: The selected PHE data did not provide sufficient prognostic factors from which to develop a useful model for predicting IMI risk in elite footballers. Further research should prioritise identifying novel prognostic factors to improve future risk prediction models in this field. Trial registration: NCT03782389.
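    Bootstrap internal validation with an adjustment for overfitting, as reported above, is often implemented as optimism correction. The sketch below shows a generic optimism-corrected C-index for a logistic model; the feature and outcome names are hypothetical and it is not the study's own pipeline.

        # Sketch: Harrell-style bootstrap optimism correction of the C-index.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def optimism_corrected_cindex(df: pd.DataFrame, features, outcome,
                                      n_boot: int = 200, seed: int = 0) -> float:
            X, y = df[features].to_numpy(), df[outcome].to_numpy()
            apparent = roc_auc_score(
                y, LogisticRegression().fit(X, y).predict_proba(X)[:, 1])

            rng = np.random.default_rng(seed)
            optimism = []
            for _ in range(n_boot):
                idx = rng.integers(0, len(y), len(y))
                if len(np.unique(y[idx])) < 2:  # need both classes to fit and score
                    continue
                m = LogisticRegression().fit(X[idx], y[idx])
                boot_c = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                test_c = roc_auc_score(y, m.predict_proba(X)[:, 1])
                optimism.append(boot_c - test_c)

            return apparent - float(np.mean(optimism))

        # e.g. optimism_corrected_cindex(df, ["age", "previous_imi_count"], "i_imi")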